Installing and calling required packages
#install.packages(c("lavaan","semPlot","corrplot"))
#install.packages("semTools")
library(lavaan)
## This is lavaan 0.6-18
## lavaan is FREE software! Please report any bugs.
library(semPlot)
library(corrplot)
## corrplot 0.95 loaded
library(semTools)
##
## ###############################################################################
## This is semTools 0.5-6
## All users of R (or SEM) are invited to submit functions or ideas for functions.
## ###############################################################################
library(ggplot2)
library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr 1.1.4 ✔ readr 2.1.5
## ✔ forcats 1.0.0 ✔ stringr 1.5.2
## ✔ lubridate 1.9.4 ✔ tibble 3.2.1
## ✔ purrr 1.0.4 ✔ tidyr 1.3.1
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ readr::clipboard() masks semTools::clipboard()
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
library(lavaanPlot)
library(tidySEM)
## Registered S3 method overwritten by 'tidySEM':
## method from
## predict.MxModel OpenMx
library(psych)
##
## Attaching package: 'psych'
##
## The following objects are masked from 'package:ggplot2':
##
## %+%, alpha
##
## The following objects are masked from 'package:semTools':
##
## reliability, skew
##
## The following object is masked from 'package:lavaan':
##
## cor2cov
# set the working directory if needed
# load in the data
fomo <- read.csv("fomo.csv")
str(fomo)
## 'data.frame': 250 obs. of 5 variables:
## $ AnxAtt : int 34 60 54 49 36 48 15 19 42 42 ...
## $ Boredom: int 26 38 31 39 30 32 8 13 24 22 ...
## $ Anxiety: int 13 18 12 20 7 17 3 4 15 21 ...
## $ Dep : int 3 21 14 21 6 21 2 6 21 15 ...
## $ FOMO : int 24 33 26 28 23 28 11 16 19 32 ...
names(fomo)
## [1] "AnxAtt" "Boredom" "Anxiety" "Dep" "FOMO"
# this part is used to demonstrate that path analysis is equivalent to regression
lm1<- lm(Boredom~ Anxiety + Dep, data=fomo)
summary(lm1)
##
## Call:
## lm(formula = Boredom ~ Anxiety + Dep, data = fomo)
##
## Residuals:
## Min 1Q Median 3Q Max
## -14.8446 -5.0499 0.1525 4.3756 18.3748
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 17.9558 0.9696 18.519 < 2e-16 ***
## Anxiety 0.3412 0.1003 3.403 0.000778 ***
## Dep 0.5582 0.1055 5.293 2.65e-07 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 6.311 on 247 degrees of freedom
## Multiple R-squared: 0.4439, Adjusted R-squared: 0.4394
## F-statistic: 98.58 on 2 and 247 DF, p-value: < 2.2e-16
#model specification
m1<-'
Boredom~ Anxiety + Dep
Anxiety ~~ Dep
Anxiety ~~ Anxiety
Dep ~~ Dep
'
#model fit
fit1_PA <- sem(m1,data=fomo)
summary(fit1_PA, fit.measures=T, standardized=T, rsquare=T)
## lavaan 0.6-18 ended normally after 22 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 6
##
## Number of observations 250
##
## Model Test User Model:
##
## Test statistic 0.000
## Degrees of freedom 0
##
## Model Test Baseline Model:
##
## Test statistic 410.334
## Degrees of freedom 3
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 1.000
## Tucker-Lewis Index (TLI) 1.000
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -2333.155
## Loglikelihood unrestricted model (H1) -2333.155
##
## Akaike (AIC) 4678.310
## Bayesian (BIC) 4699.439
## Sample-size adjusted Bayesian (SABIC) 4680.418
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.000
## 90 Percent confidence interval - lower 0.000
## 90 Percent confidence interval - upper 0.000
## P-value H_0: RMSEA <= 0.050 NA
## P-value H_0: RMSEA >= 0.080 NA
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.000
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Boredom ~
## Anxiety 0.341 0.100 3.423 0.001 0.341 0.274
## Dep 0.558 0.105 5.325 0.000 0.558 0.426
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Anxiety ~~
## Dep 34.908 3.515 9.932 0.000 34.908 0.807
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Anxiety 45.474 4.067 11.180 0.000 45.474 1.000
## Dep 41.122 3.678 11.180 0.000 41.122 1.000
## .Boredom 39.350 3.520 11.180 0.000 39.350 0.556
##
## R-Square:
## Estimate
## Boredom 0.444
Interpretation: Linear regression and path analysis produce the same estimates (Anxiety = .34; Dep = .56, etc.), demonstrating that this saturated path model is equivalent to the regression.
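To make the equivalence concrete, the slopes from `lm1` and the corresponding lavaan estimates can be placed side by side (a quick sketch; in this model `coef(fit1_PA)` lists the two regression paths first):

```r
# Compare the regression slopes from lm() with lavaan's path estimates
round(cbind(lm     = coef(lm1)[c("Anxiety", "Dep")],
            lavaan = coef(fit1_PA)[1:2]), 3)
```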
#Histograms
ggplot(gather(fomo), aes(value)) +
geom_histogram(bins = 10) +
facet_wrap(~key, scales = 'free_x')
#Mardia test of multivariate normality
mardia(fomo)
## Call: mardia(x = fomo)
##
## Mardia tests of multivariate skew and kurtosis
## Use describe(x) the to get univariate tests
## n.obs = 250 num.vars = 5
## b1p = 1.38 skew = 57.46 with probability <= 0.0097
## small sample skew = 58.38 with probability <= 0.0079
## b2p = 32.5 kurtosis = -2.36 with probability <= 0.018
From the output, we can see that the Anxiety and Depression variables are not normally distributed. We could consider transformations for Anxiety and Depression, as covered in previous GLM labs.
#### Assumption 2: Sample size
str(fomo)
## 'data.frame': 250 obs. of 5 variables:
## $ AnxAtt : int 34 60 54 49 36 48 15 19 42 42 ...
## $ Boredom: int 26 38 31 39 30 32 8 13 24 22 ...
## $ Anxiety: int 13 18 12 20 7 17 3 4 15 21 ...
## $ Dep : int 3 21 14 21 6 21 2 6 21 15 ...
## $ FOMO : int 24 33 26 28 23 28 11 16 19 32 ...
A rule of thumb when the ML (maximum likelihood) method is used for estimation: the ratio of sample size (N) to the number of free parameters (q) should exceed 5:1, and ideally 10:1.
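As a quick sketch, this ratio can be computed directly for the saturated path model fitted above (lavaan reported 6 free parameters for `fit1_PA`):

```r
# N:q ratio for fit1_PA; 250 observations over 6 parameters is about 42:1,
# well above the 10:1 guideline
N <- nrow(fomo)
q <- fitmeasures(fit1_PA, "npar")
as.numeric(N / q)
```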
# Typically, an observation whose Mahalanobis-distance p-value is below .001 is flagged as a multivariate outlier
fomo$mahal<-mahalanobis(fomo, colMeans(fomo), cov(fomo))
df <- ncol(fomo) - 1 # count only the 5 observed variables, not the mahal column just added
fomo$p <- pchisq(fomo$mahal, df=df, lower.tail=FALSE)
head(fomo)
## AnxAtt Boredom Anxiety Dep FOMO mahal p
## 1 34 26 13 3 24 8.8280210 0.183483562
## 2 60 38 18 21 33 2.5315873 0.864915927
## 3 54 31 12 14 26 2.9659618 0.813106709
## 4 49 39 20 21 28 2.1036853 0.909919817
## 5 36 30 7 6 23 2.7207204 0.842993810
## 6 48 32 17 21 28 1.6585176 0.948278640
fomo$p[fomo$p<.001]
## numeric(0)
which(fomo$p<.001)
## integer(0)
Interpretation: No observations have p &lt; .001, so there are no multivariate outliers by this criterion.
colSums(is.na(fomo))
## AnxAtt Boredom Anxiety Dep FOMO mahal p
## 0 0 0 0 0 0 0
Interpretation: There are no missing values in the fomo dataset, so we can proceed with the analysis.
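Had there been missing values, lavaan can estimate the model with full-information maximum likelihood instead of deleting cases; a minimal sketch (not needed here, since the data are complete):

```r
# FIML estimation via the missing= argument; with complete data this
# reproduces the default (listwise) results
fit1_fiml <- sem(m1, data = fomo, missing = "fiml")
```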
m2<- '
Boredom ~ Anxiety + Dep
FOMO ~ Boredom
Anxiety ~~ Dep
'
#model fit
fit2_pa<- sem(m2, data=fomo) #fit the model
#inspect model specification
inspect(fit2_pa)
## $lambda
## Boredm FOMO Anxity Dep
## Boredom 0 0 0 0
## FOMO 0 0 0 0
## Anxiety 0 0 0 0
## Dep 0 0 0 0
##
## $theta
## Boredm FOMO Anxity Dep
## Boredom 0
## FOMO 0 0
## Anxiety 0 0 0
## Dep 0 0 0 0
##
## $psi
## Boredm FOMO Anxity Dep
## Boredom 5
## FOMO 0 6
## Anxiety 0 0 7
## Dep 0 0 4 8
##
## $beta
## Boredm FOMO Anxity Dep
## Boredom 0 0 1 2
## FOMO 3 0 0 0
## Anxiety 0 0 0 0
## Dep 0 0 0 0
To ensure that we have correctly specified the model, we can use inspect() to check the model specification. It lets us examine the number and placement of estimated parameters in the model.
#### Parameter Interpretation
# interpret estimated parameters
summary(fit2_pa, fit.measures=TRUE,standardized=TRUE,rsquare=TRUE)
## lavaan 0.6-18 ended normally after 22 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 8
##
## Number of observations 250
##
## Model Test User Model:
##
## Test statistic 64.158
## Degrees of freedom 2
## P-value (Chi-square) 0.000
##
## Model Test Baseline Model:
##
## Test statistic 591.315
## Degrees of freedom 6
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 0.894
## Tucker-Lewis Index (TLI) 0.681
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -3126.825
## Loglikelihood unrestricted model (H1) -3094.746
##
## Akaike (AIC) 6269.650
## Bayesian (BIC) 6297.821
## Sample-size adjusted Bayesian (SABIC) 6272.461
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.353
## 90 Percent confidence interval - lower 0.281
## 90 Percent confidence interval - upper 0.429
## P-value H_0: RMSEA <= 0.050 0.000
## P-value H_0: RMSEA >= 0.080 1.000
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.114
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Boredom ~
## Anxiety 0.341 0.100 3.423 0.001 0.341 0.274
## Dep 0.558 0.105 5.325 0.000 0.558 0.426
## FOMO ~
## Boredom 0.531 0.044 12.203 0.000 0.531 0.611
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Anxiety ~~
## Dep 34.908 3.515 9.932 0.000 34.908 0.807
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .Boredom 39.350 3.520 11.180 0.000 39.350 0.556
## .FOMO 33.498 2.996 11.180 0.000 33.498 0.627
## Anxiety 45.474 4.067 11.180 0.000 45.474 1.000
## Dep 41.122 3.678 11.180 0.000 41.122 1.000
##
## R-Square:
## Estimate
## Boredom 0.444
## FOMO 0.373
inspect(fit2_pa,"sampstat") #sample stat generated by lavaan
## $cov
## Boredm FOMO Anxity Dep
## Boredom 70.758
## FOMO 37.575 53.452
## Anxiety 35.004 33.174 45.474
## Dep 34.867 28.250 34.908 41.122
fitted(fit2_pa)
## $cov
## Boredm FOMO Anxity Dep
## Boredom 70.758
## FOMO 37.575 53.452
## Anxiety 35.004 18.588 45.474
## Dep 34.867 18.516 34.908 41.122
Most elements of the sample and model-implied covariance matrices match, but the FOMO-Anxiety (33.17 vs. 18.59) and FOMO-Dep (28.25 vs. 18.52) covariances differ substantially, an early sign of misfit.
#### Model fit
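The element-wise differences between the two matrices can be computed directly (a quick sketch; large entries localize where the model misses):

```r
# Sample covariances minus model-implied covariances
round(inspect(fit2_pa, "sampstat")$cov - fitted(fit2_pa)$cov, 3)
```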
# Overall Model fit
summary(fit2_pa, fit.measures=TRUE,standardized=TRUE,rsquare=TRUE)
## lavaan 0.6-18 ended normally after 22 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 8
##
## Number of observations 250
##
## Model Test User Model:
##
## Test statistic 64.158
## Degrees of freedom 2
## P-value (Chi-square) 0.000
##
## Model Test Baseline Model:
##
## Test statistic 591.315
## Degrees of freedom 6
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 0.894
## Tucker-Lewis Index (TLI) 0.681
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -3126.825
## Loglikelihood unrestricted model (H1) -3094.746
##
## Akaike (AIC) 6269.650
## Bayesian (BIC) 6297.821
## Sample-size adjusted Bayesian (SABIC) 6272.461
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.353
## 90 Percent confidence interval - lower 0.281
## 90 Percent confidence interval - upper 0.429
## P-value H_0: RMSEA <= 0.050 0.000
## P-value H_0: RMSEA >= 0.080 1.000
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.114
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Boredom ~
## Anxiety 0.341 0.100 3.423 0.001 0.341 0.274
## Dep 0.558 0.105 5.325 0.000 0.558 0.426
## FOMO ~
## Boredom 0.531 0.044 12.203 0.000 0.531 0.611
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Anxiety ~~
## Dep 34.908 3.515 9.932 0.000 34.908 0.807
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .Boredom 39.350 3.520 11.180 0.000 39.350 0.556
## .FOMO 33.498 2.996 11.180 0.000 33.498 0.627
## Anxiety 45.474 4.067 11.180 0.000 45.474 1.000
## Dep 41.122 3.678 11.180 0.000 41.122 1.000
##
## R-Square:
## Estimate
## Boredom 0.444
## FOMO 0.373
# For all types of fit
fitmeasures(fit2_pa)
## npar fmin chisq
## 8.000 0.128 64.158
## df pvalue baseline.chisq
## 2.000 0.000 591.315
## baseline.df baseline.pvalue cfi
## 6.000 0.000 0.894
## tli nnfi rfi
## 0.681 0.681 0.675
## nfi pnfi ifi
## 0.892 0.297 0.895
## rni logl unrestricted.logl
## 0.894 -3126.825 -3094.746
## aic bic ntotal
## 6269.650 6297.821 250.000
## bic2 rmsea rmsea.ci.lower
## 6272.461 0.353 0.281
## rmsea.ci.upper rmsea.ci.level rmsea.pvalue
## 0.429 0.900 0.000
## rmsea.close.h0 rmsea.notclose.pvalue rmsea.notclose.h0
## 0.050 1.000 0.080
## rmr rmr_nomean srmr
## 5.545 5.545 0.114
## srmr_bentler srmr_bentler_nomean crmr
## 0.114 0.114 0.148
## crmr_nomean srmr_mplus srmr_mplus_nomean
## 0.148 0.114 0.114
## cn_05 cn_01 gfi
## 24.347 36.890 0.898
## agfi pgfi mfi
## 0.492 0.180 0.883
## ecvi
## 0.321
To understand why our model does not fit the data well, we can examine the residuals.
resid(fit2_pa)
## $type
## [1] "raw"
##
## $cov
## Boredm FOMO Anxity Dep
## Boredom 0.000
## FOMO 0.000 0.000
## Anxiety 0.000 14.586 0.000
## Dep 0.000 9.734 0.000 0.000
resid(fit2_pa,type="standardized")
## $type
## [1] "standardized"
##
## $cov
## Boredm FOMO Anxity Dep
## Boredom 0.000
## FOMO 0.000 0.000
## Anxiety 0.000 6.783 0.000
## Dep 0.000 5.140 0.000 0.000
resid(fit2_pa,type="normalized")
## $type
## [1] "normalized"
##
## $cov
## Boredm FOMO Anxity Dep
## Boredom 0.000
## FOMO 0.000 0.000
## Anxiety 0.000 3.881 0.000
## Dep 0.000 2.812 0.000 0.000
semPaths(fit2_pa, what="path", whatLabels="std", residuals=FALSE)
### 1.4 Model re-specification
Modification indices (MI) suggest potential model re-specifications (through the inclusion of additional parameters) that may improve model fit.
modindices(fit2_pa)
## lhs op rhs mi epc sepc.lv sepc.all sepc.nox
## 9 Boredom ~~ FOMO 46.311 -23.454 -23.454 -0.646 -0.646
## 12 FOMO ~~ Anxiety 19.338 6.492 6.492 0.166 0.166
## 13 FOMO ~~ Dep 1.190 -1.561 -1.561 -0.042 -0.042
## 14 Boredom ~ FOMO 46.311 -0.700 -0.700 -0.609 -0.609
## 15 FOMO ~ Anxiety 56.387 0.518 0.518 0.478 0.478
## 16 FOMO ~ Dep 29.537 0.407 0.407 0.357 0.357
## 18 Anxiety ~ FOMO 19.338 0.194 0.194 0.210 0.210
## 21 Dep ~ FOMO 1.190 -0.047 -0.047 -0.053 -0.053
For our model, the MI for FOMO ~ Anxiety is the largest, suggesting that adding a path from Anxiety to FOMO will improve model fit.
m3<- '
Boredom ~ Anxiety + Dep
FOMO ~ Boredom + Anxiety # adding the path from Anxiety to FOMO
Anxiety ~~ Dep
'
fit3_pa <- sem(m3, data=fomo)
inspect(fit3_pa)
## $lambda
## Boredm FOMO Anxity Dep
## Boredom 0 0 0 0
## FOMO 0 0 0 0
## Anxiety 0 0 0 0
## Dep 0 0 0 0
##
## $theta
## Boredm FOMO Anxity Dep
## Boredom 0
## FOMO 0 0
## Anxiety 0 0 0
## Dep 0 0 0 0
##
## $psi
## Boredm FOMO Anxity Dep
## Boredom 6
## FOMO 0 7
## Anxiety 0 0 8
## Dep 0 0 5 9
##
## $beta
## Boredm FOMO Anxity Dep
## Boredom 0 0 1 2
## FOMO 3 0 4 0
## Anxiety 0 0 0 0
## Dep 0 0 0 0
summary(fit3_pa, fit.measures=TRUE,standardized=TRUE,rsquare=TRUE)
## lavaan 0.6-18 ended normally after 27 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 9
##
## Number of observations 250
##
## Model Test User Model:
##
## Test statistic 0.258
## Degrees of freedom 1
## P-value (Chi-square) 0.612
##
## Model Test Baseline Model:
##
## Test statistic 591.315
## Degrees of freedom 6
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 1.000
## Tucker-Lewis Index (TLI) 1.008
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -3094.875
## Loglikelihood unrestricted model (H1) -3094.746
##
## Akaike (AIC) 6207.750
## Bayesian (BIC) 6239.443
## Sample-size adjusted Bayesian (SABIC) 6210.912
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.000
## 90 Percent confidence interval - lower 0.000
## 90 Percent confidence interval - upper 0.134
## P-value H_0: RMSEA <= 0.050 0.708
## P-value H_0: RMSEA >= 0.080 0.186
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.004
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Boredom ~
## Anxiety 0.341 0.100 3.423 0.001 0.341 0.274
## Dep 0.558 0.105 5.325 0.000 0.558 0.426
## FOMO ~
## Boredom 0.275 0.049 5.646 0.000 0.275 0.316
## Anxiety 0.518 0.061 8.533 0.000 0.518 0.478
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Anxiety ~~
## Dep 34.908 3.515 9.932 0.000 34.908 0.807
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .Boredom 39.350 3.520 11.180 0.000 39.350 0.556
## .FOMO 25.942 2.320 11.180 0.000 25.942 0.485
## Anxiety 45.474 4.067 11.180 0.000 45.474 1.000
## Dep 41.122 3.678 11.180 0.000 41.122 1.000
##
## R-Square:
## Estimate
## Boredom 0.444
## FOMO 0.515
anova(fit2_pa, fit3_pa)
##
## Chi-Squared Difference Test
##
## Df AIC BIC Chisq Chisq diff RMSEA Df diff Pr(>Chisq)
## fit3_pa 1 6207.7 6239.4 0.2577
## fit2_pa 2 6269.6 6297.8 64.1575 63.9 0.5016 1 1.309e-15 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
lavTestLRT(fit2_pa, fit3_pa)
##
## Chi-Squared Difference Test
##
## Df AIC BIC Chisq Chisq diff RMSEA Df diff Pr(>Chisq)
## fit3_pa 1 6207.7 6239.4 0.2577
## fit2_pa 2 6269.6 6297.8 64.1575 63.9 0.5016 1 1.309e-15 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
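For a compact side-by-side view of the two models, selected fit indices can be pulled with fitmeasures() (a sketch):

```r
# Key fit indices for the original (fit2_pa) and re-specified (fit3_pa) models
idx <- c("chisq", "df", "cfi", "tli", "rmsea", "srmr", "aic", "bic")
round(cbind(fit2_pa = fitmeasures(fit2_pa, idx),
            fit3_pa = fitmeasures(fit3_pa, idx)), 3)
```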
#model diagram with parameters
semPaths(fit2_pa,what = "paths",whatLabels = "par",layout="tree3",residuals=T)
semPaths(fit3_pa,what = "paths",whatLabels = "par",layout="tree",residuals=T)
# Simpler plots
lavaanPlot(model = fit2_pa, coefs = T)
lavaanPlot(model = fit3_pa, coefs = T)
# Parameter tables for easier interpretation
table_results(fit2_pa)
## label est_sig se pval confint
## 1 Boredom.ON.Anxiety 0.34*** 0.10 0.00 [0.15, 0.54]
## 2 Boredom.ON.Dep 0.56*** 0.10 0.00 [0.35, 0.76]
## 3 FOMO.ON.Boredom 0.53*** 0.04 0.00 [0.45, 0.62]
## 4 Anxiety.WITH.Dep 34.91*** 3.51 0.00 [28.02, 41.80]
## 5 Variances.Boredom 39.35*** 3.52 0.00 [32.45, 46.25]
## 6 Variances.FOMO 33.50*** 3.00 0.00 [27.63, 39.37]
## 7 Variances.Anxiety 45.47*** 4.07 0.00 [37.50, 53.45]
## 8 Variances.Dep 41.12*** 3.68 0.00 [33.91, 48.33]
table_results(fit3_pa)
## label est_sig se pval confint
## 1 Boredom.ON.Anxiety 0.34*** 0.10 0.00 [0.15, 0.54]
## 2 Boredom.ON.Dep 0.56*** 0.10 0.00 [0.35, 0.76]
## 3 FOMO.ON.Boredom 0.27*** 0.05 0.00 [0.18, 0.37]
## 4 FOMO.ON.Anxiety 0.52*** 0.06 0.00 [0.40, 0.64]
## 5 Anxiety.WITH.Dep 34.91*** 3.51 0.00 [28.02, 41.80]
## 6 Variances.Boredom 39.35*** 3.52 0.00 [32.45, 46.25]
## 7 Variances.FOMO 25.94*** 2.32 0.00 [21.39, 30.49]
## 8 Variances.Anxiety 45.47*** 4.07 0.00 [37.50, 53.45]
## 9 Variances.Dep 41.12*** 3.68 0.00 [33.91, 48.33]
# Another plot option
graph_sem(fit2_pa)
graph_sem(fit3_pa)
hbat<-read.csv("hbat_sem.csv")
str(hbat)
## 'data.frame': 400 obs. of 33 variables:
## $ ID : int 1 2 3 4 5 6 7 8 9 10 ...
## $ JS1: int 5 3 4 4 5 6 2 2 4 5 ...
## $ OC1: int 3 0 6 7 2 5 6 4 9 5 ...
## $ OC2: int 5 5 10 7 10 8 10 9 10 9 ...
## $ EP1: int 10 10 10 10 10 8 9 10 8 10 ...
## $ OC3: int 10 3 10 10 9 7 10 9 10 9 ...
## $ OC4: int 10 7 10 7 9 7 9 7 10 10 ...
## $ EP2: int 10 10 10 10 9 10 9 10 6 10 ...
## $ EP3: int 5 10 10 9 10 7 9 10 8 8 ...
## $ AC1: int 1 2 1 2 1 1 2 1 3 2 ...
## $ EP4: int 2 7 7 7 6 7 6 7 3 7 ...
## $ JS2: int 4 4 2 5 4 6 3 2 5 3 ...
## $ JS3: int 3 3 2 4 3 5 6 1 1 2 ...
## $ AC2: int 2 1 4 1 1 2 4 1 4 4 ...
## $ SI1: int 4 5 5 5 5 5 5 3 3 4 ...
## $ JS4: int 3 2 3 2 2 3 4 1 1 2 ...
## $ SI2: int 4 4 5 4 5 4 5 4 3 4 ...
## $ JS5: int 23 43 60 33 58 62 11 21 80 33 ...
## $ AC3: int 1 1 1 1 2 1 3 1 3 1 ...
## $ SI3: int 3 4 5 3 3 3 5 4 2 3 ...
## $ AC4: int 1 1 2 1 2 1 3 3 1 4 ...
## $ SI4: int 3 4 5 4 4 3 4 2 2 3 ...
## $ C1 : int 1 1 1 1 1 1 1 1 1 1 ...
## $ C2 : int 0 1 1 0 1 0 1 1 0 0 ...
## $ C3 : int 1 1 1 1 1 1 1 1 1 1 ...
## $ AGE: chr "42" "32" "43" "26" ...
## $ EXP: chr "6" "5.8" "1" "3" ...
## $ JP : chr "5" "4" "5" "5" ...
## $ JS : num -0.22499 -0.55478 -0.51594 -0.13101 0.00117 ...
## $ OC : num -0.364 -1.661 0.781 -0.33 0.321 ...
## $ SI : num -0.37 0.388 1.281 0.222 0.641 ...
## $ EP : chr "." "0.8679" "0.8679" "0.65938" ...
## $ AC : chr "-1.29457" "-1.25685" "-0.8274" "-1.25685" ...
names(hbat)
## [1] "ID" "JS1" "OC1" "OC2" "EP1" "OC3" "OC4" "EP2" "EP3" "AC1" "EP4" "JS2"
## [13] "JS3" "AC2" "SI1" "JS4" "SI2" "JS5" "AC3" "SI3" "AC4" "SI4" "C1" "C2"
## [25] "C3" "AGE" "EXP" "JP" "JS" "OC" "SI" "EP" "AC"
hbat.sem <- hbat[,1:27] # exclude the composite variables whose names clash with the latent constructs
names(hbat.sem)
## [1] "ID" "JS1" "OC1" "OC2" "EP1" "OC3" "OC4" "EP2" "EP3" "AC1" "EP4" "JS2"
## [13] "JS3" "AC2" "SI1" "JS4" "SI2" "JS5" "AC3" "SI3" "AC4" "SI4" "C1" "C2"
## [25] "C3" "AGE" "EXP"
Based on published literature and some preliminary interviews with employees, HBAT initiated a research project focusing on five key constructs to study the employee turnover problem. The five constructs of interest are Job satisfaction (JS), Organizational commitment (OC), Staying intentions (SI), Environmental perceptions (EP), and Attitudes toward coworkers (AC).
In this stage, researchers must specify the measurement model to be tested, including the relationships among the defined constructs and the nature of each construct (reflective versus formative). Here, all measures are hypothesized to be reflective: the direction of influence runs from the latent construct to the measured items.
mcfa<- '
JS =~ JS1+JS2+JS3+JS4+JS5
AC =~ AC1+AC2+AC3+AC4
OC =~ OC1+OC2+OC3+OC4
EP =~ EP1+EP2+EP3+EP4
SI =~ SI1+SI2+SI3+SI4
'
fcfa<- cfa(mcfa,data=hbat.sem)
inspect(fcfa)
## $lambda
## JS AC OC EP SI
## JS1 0 0 0 0 0
## JS2 1 0 0 0 0
## JS3 2 0 0 0 0
## JS4 3 0 0 0 0
## JS5 4 0 0 0 0
## AC1 0 0 0 0 0
## AC2 0 5 0 0 0
## AC3 0 6 0 0 0
## AC4 0 7 0 0 0
## OC1 0 0 0 0 0
## OC2 0 0 8 0 0
## OC3 0 0 9 0 0
## OC4 0 0 10 0 0
## EP1 0 0 0 0 0
## EP2 0 0 0 11 0
## EP3 0 0 0 12 0
## EP4 0 0 0 13 0
## SI1 0 0 0 0 0
## SI2 0 0 0 0 14
## SI3 0 0 0 0 15
## SI4 0 0 0 0 16
##
## $theta
## JS1 JS2 JS3 JS4 JS5 AC1 AC2 AC3 AC4 OC1 OC2 OC3 OC4 EP1 EP2 EP3 EP4 SI1 SI2
## JS1 17
## JS2 0 18
## JS3 0 0 19
## JS4 0 0 0 20
## JS5 0 0 0 0 21
## AC1 0 0 0 0 0 22
## AC2 0 0 0 0 0 0 23
## AC3 0 0 0 0 0 0 0 24
## AC4 0 0 0 0 0 0 0 0 25
## OC1 0 0 0 0 0 0 0 0 0 26
## OC2 0 0 0 0 0 0 0 0 0 0 27
## OC3 0 0 0 0 0 0 0 0 0 0 0 28
## OC4 0 0 0 0 0 0 0 0 0 0 0 0 29
## EP1 0 0 0 0 0 0 0 0 0 0 0 0 0 30
## EP2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 31
## EP3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 32
## EP4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 33
## SI1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 34
## SI2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 35
## SI3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
## SI4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
## SI3 SI4
## JS1
## JS2
## JS3
## JS4
## JS5
## AC1
## AC2
## AC3
## AC4
## OC1
## OC2
## OC3
## OC4
## EP1
## EP2
## EP3
## EP4
## SI1
## SI2
## SI3 36
## SI4 0 37
##
## $psi
## JS AC OC EP SI
## JS 38
## AC 43 39
## OC 44 47 40
## EP 45 48 50 41
## SI 46 49 51 52 42
In this stage, we examined the results of testing measurement theory by comparing the theoretical measurement model against reality, as represented by the sample covariance matrix. We first examined overall fit through key GOF values and construct validity, which consists of convergent, discriminant, and nomological validity. Then, path estimates, standardized residuals, and modification indices were further examined for model improvement.
summary(fcfa,standardized=T, fit.measures=T, rsquare=T)
## lavaan 0.6-18 ended normally after 54 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 52
##
## Number of observations 400
##
## Model Test User Model:
##
## Test statistic 240.600
## Degrees of freedom 179
## P-value (Chi-square) 0.001
##
## Model Test Baseline Model:
##
## Test statistic 4452.408
## Degrees of freedom 210
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 0.985
## Tucker-Lewis Index (TLI) 0.983
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -13916.694
## Loglikelihood unrestricted model (H1) -13796.393
##
## Akaike (AIC) 27937.387
## Bayesian (BIC) 28144.943
## Sample-size adjusted Bayesian (SABIC) 27979.944
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.029
## 90 Percent confidence interval - lower 0.019
## 90 Percent confidence interval - upper 0.038
## P-value H_0: RMSEA <= 0.050 1.000
## P-value H_0: RMSEA >= 0.080 0.000
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.036
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## JS =~
## JS1 1.000 0.990 0.740
## JS2 1.033 0.076 13.682 0.000 1.023 0.748
## JS3 0.903 0.072 12.515 0.000 0.894 0.680
## JS4 0.910 0.070 12.953 0.000 0.901 0.705
## JS5 15.190 1.133 13.410 0.000 15.042 0.731
## AC =~
## AC1 1.000 1.144 0.822
## AC2 1.236 0.067 18.392 0.000 1.414 0.820
## AC3 1.037 0.055 18.870 0.000 1.187 0.837
## AC4 1.146 0.063 18.255 0.000 1.312 0.815
## OC =~
## OC1 1.000 1.471 0.583
## OC2 1.314 0.108 12.209 0.000 1.934 0.886
## OC3 0.783 0.076 10.322 0.000 1.151 0.657
## OC4 1.165 0.097 11.968 0.000 1.714 0.836
## EP =~
## EP1 1.000 1.265 0.692
## EP2 1.033 0.073 14.083 0.000 1.307 0.803
## EP3 0.821 0.060 13.734 0.000 1.038 0.779
## EP4 0.914 0.064 14.335 0.000 1.156 0.823
## SI =~
## SI1 1.000 0.706 0.811
## SI2 1.073 0.055 19.563 0.000 0.757 0.864
## SI3 1.065 0.066 16.053 0.000 0.752 0.741
## SI4 1.167 0.061 19.230 0.000 0.823 0.852
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## JS ~~
## AC 0.057 0.065 0.866 0.386 0.050 0.050
## OC 0.304 0.090 3.391 0.001 0.209 0.209
## EP 0.303 0.078 3.893 0.000 0.242 0.242
## SI 0.161 0.042 3.834 0.000 0.230 0.230
## AC ~~
## OC 0.517 0.107 4.842 0.000 0.307 0.307
## EP 0.372 0.088 4.251 0.000 0.257 0.257
## SI 0.249 0.048 5.161 0.000 0.309 0.309
## OC ~~
## EP 0.925 0.143 6.469 0.000 0.497 0.497
## SI 0.574 0.080 7.191 0.000 0.553 0.553
## EP ~~
## SI 0.502 0.065 7.733 0.000 0.562 0.562
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .JS1 0.808 0.074 10.964 0.000 0.808 0.452
## .JS2 0.825 0.076 10.818 0.000 0.825 0.441
## .JS3 0.930 0.078 11.909 0.000 0.930 0.538
## .JS4 0.824 0.071 11.571 0.000 0.824 0.504
## .JS5 196.785 17.673 11.135 0.000 196.785 0.465
## .AC1 0.628 0.059 10.566 0.000 0.628 0.324
## .AC2 0.973 0.092 10.617 0.000 0.973 0.327
## .AC3 0.601 0.059 10.114 0.000 0.601 0.299
## .AC4 0.867 0.081 10.742 0.000 0.867 0.335
## .OC1 4.196 0.318 13.177 0.000 4.196 0.660
## .OC2 1.029 0.149 6.929 0.000 1.029 0.216
## .OC3 1.745 0.137 12.700 0.000 1.745 0.568
## .OC4 1.267 0.138 9.193 0.000 1.267 0.301
## .EP1 1.744 0.143 12.222 0.000 1.744 0.522
## .EP2 0.937 0.091 10.245 0.000 0.937 0.354
## .EP3 0.699 0.064 10.860 0.000 0.699 0.393
## .EP4 0.637 0.066 9.659 0.000 0.637 0.323
## .SI1 0.258 0.023 11.121 0.000 0.258 0.342
## .SI2 0.195 0.021 9.479 0.000 0.195 0.253
## .SI3 0.464 0.038 12.264 0.000 0.464 0.451
## .SI4 0.256 0.026 9.952 0.000 0.256 0.274
## JS 0.981 0.122 8.034 0.000 1.000 1.000
## AC 1.309 0.135 9.664 0.000 1.000 1.000
## OC 2.164 0.357 6.058 0.000 1.000 1.000
## EP 1.600 0.214 7.471 0.000 1.000 1.000
## SI 0.498 0.052 9.526 0.000 1.000 1.000
##
## R-Square:
## Estimate
## JS1 0.548
## JS2 0.559
## JS3 0.462
## JS4 0.496
## JS5 0.535
## AC1 0.676
## AC2 0.673
## AC3 0.701
## AC4 0.665
## OC1 0.340
## OC2 0.784
## OC3 0.432
## OC4 0.699
## EP1 0.478
## EP2 0.646
## EP3 0.607
## EP4 0.677
## SI1 0.658
## SI2 0.747
## SI3 0.549
## SI4 0.726
Chi-square, CFI, and RMSEA are evaluated. Results show that the HBAT measurement model provides a reasonably good fit.
#### Construct Validity
# convergent validity
#AVE
(0.740^2+0.748^2+0.680^2+0.705^2+0.731^2)/5 #JS
## [1] 0.520178
(0.822^2+0.820^2+0.837^2+0.815^2)/4 #AC
## [1] 0.6782195
(0.583^2+0.886^2+0.657^2+0.836^2)/4 #OC
## [1] 0.5638575
(0.692^2+0.803^2+0.779^2+0.823^2)/4 #EP
## [1] 0.6019608
(0.811^2+0.864^2+0.741^2+0.852^2)/4 #SI
## [1] 0.6698005
An AVE of less than .5 indicates that, on average, more error remains in the items than variance shared with the latent factor on which they load. All five AVE estimates exceed .5, so we have adequate convergent validity.
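The hand calculations above can be wrapped in a small helper. A minimal sketch (the `ave` function is our own; the loadings are the standardized estimates from the summary above, and with the fitted object the same values can be pulled from `inspect(fcfa, "std")$lambda`):

```r
# AVE = mean of the squared standardized loadings (Fornell & Larcker)
ave <- function(loadings) mean(loadings^2)

ave(c(0.740, 0.748, 0.680, 0.705, 0.731))  # JS: 0.520178
ave(c(0.822, 0.820, 0.837, 0.815))         # AC: 0.6782195
```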
# discriminant validity
inspect(fcfa, "cor.lv")
## JS AC OC EP SI
## JS 1.000
## AC 0.050 1.000
## OC 0.209 0.307 1.000
## EP 0.242 0.257 0.497 1.000
## SI 0.230 0.309 0.553 0.562 1.000
lvcorr<- inspect(fcfa, "cor.lv")
lvcorr^2
## JS AC OC EP SI
## JS 1.000
## AC 0.002 1.000
## OC 0.044 0.094 1.000
## EP 0.058 0.066 0.247 1.000
## SI 0.053 0.095 0.306 0.316 1.000
The AVE estimates for any two factors should each be greater than the square of the correlation between those two factors to provide evidence of discriminant validity. All AVE estimates are greater than the corresponding interconstruct squared correlation estimates. Therefore, this test indicates no problems with discriminant validity for the HBAT CFA model.
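The pairwise comparisons can be summarized by one stricter check: if even the smallest AVE exceeds the largest squared interconstruct correlation, no pair can violate the criterion. A sketch using the rounded estimates printed above:

```r
# AVEs (from the convergent validity section) and the squared latent
# correlations (upper triangle of lvcorr^2), rounded to 3 decimals
ave    <- c(JS = 0.520, AC = 0.678, OC = 0.564, EP = 0.602, SI = 0.670)
sq_cor <- c(JS_AC = 0.002, JS_OC = 0.044, JS_EP = 0.058, JS_SI = 0.053,
            AC_OC = 0.094, AC_EP = 0.066, AC_SI = 0.095,
            OC_EP = 0.247, OC_SI = 0.306, EP_SI = 0.316)
min(ave) > max(sq_cor)  # TRUE: discriminant validity holds for every pair
```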
# Nomological validity
inspect(fcfa, "cor.lv")
## JS AC OC EP SI
## JS 1.000
## AC 0.050 1.000
## OC 0.209 0.307 1.000
## EP 0.242 0.257 0.497 1.000
## SI 0.230 0.309 0.553 0.562 1.000
The correlation matrix provides a useful starting point for assessing nomological validity: the constructs should correlate with one another in the directions that theory predicts.
# reliability
semTools::reliability(fcfa)
## JS AC OC EP SI
## alpha 0.2809951 0.8907652 0.8227408 0.8474340 0.8863459
## omega 0.6396532 0.8928576 0.8267927 0.8496932 0.8871747
## omega2 0.6396532 0.8928576 0.8267927 0.8496932 0.8871747
## omega3 0.6404932 0.8928422 0.8180947 0.8498606 0.8870258
## avevar 0.5345483 0.6772156 0.5524543 0.5874690 0.6635062
Construct reliability should be .70 or higher to indicate adequate internal consistency. The alpha for the JS construct did not reach this level, so internal consistency reliability does not hold for JS.
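Composite reliability (CR) can also be computed directly from the standardized loadings, which sidesteps one quirk of alpha here: JS5 is on a much larger raw scale than the other JS items (its residual variance is about 197 versus roughly 1 for the rest), which drags the raw-score alpha and omega for JS down. A sketch of the usual CR formula (the `cr` helper is our own; loadings are the standardized estimates printed above):

```r
# CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
# where each standardized error variance is 1 - loading^2
cr <- function(l) sum(l)^2 / (sum(l)^2 + sum(1 - l^2))

cr(c(0.740, 0.748, 0.680, 0.705, 0.731))  # JS: about 0.84
cr(c(0.822, 0.820, 0.837, 0.815))         # AC: about 0.89
```

On the standardized solution JS looks internally consistent (CR about .84); the much lower alpha and omega reported above reflect JS5's raw-scale dominance in an unweighted composite rather than weak loadings.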
resid(fcfa,"standardized")
## $type
## [1] "standardized"
##
## $cov
## JS1 JS2 JS3 JS4 JS5 AC1
## JS1 0.000
## JS2 0.072 0.000
## JS3 0.616 -0.736 0.000
## JS4 -0.597 -0.258 1.442 0.000
## JS5 0.267 0.837 -1.542 -0.286 759379.280
## AC1 0.277 -0.962 0.401 0.472 2.214 0.000
## AC2 -0.779 -1.277 -0.453 -0.424 1.342 -0.126
## AC3 -1.493 -1.634 -1.020 0.441 0.589 0.318
## AC4 -0.314 -0.604 1.022 0.572 2.345 0.002
## OC1 -0.697 -0.833 -0.228 -0.660 -0.091 -2.231
## OC2 0.134 -1.549 0.166 -0.878 1.616 1.511
## OC3 1.398 0.194 -0.240 0.791 1.001 -0.829
## OC4 0.945 -1.176 -0.198 -0.809 1.327 -1.945
## EP1 -1.495 1.086 0.774 -0.108 0.944 -0.165
## EP2 -0.595 0.651 -0.334 -0.596 -0.038 0.088
## EP3 -1.004 0.470 0.506 -0.517 1.979 0.650
## EP4 -1.733 0.396 0.354 -0.533 0.503 -0.963
## SI1 -0.691 -0.472 0.609 -0.344 1.182 0.770
## SI2 -0.991 -0.700 -0.832 -1.436 0.721 -0.655
## SI3 -2.056 -0.623 1.155 0.666 0.102 0.417
## SI4 -0.172 0.146 1.242 0.834 2.054 -0.445
## AC2 AC3 AC4 OC1 OC2 OC3
## JS1
## JS2
## JS3
## JS4
## JS5
## AC1
## AC2 0.000
## AC3 0.393 0.000
## AC4 0.184 -0.808 0.000
## OC1 -1.062 -1.375 -0.420 0.000
## OC2 -0.434 1.692 2.368 0.626 0.000
## OC3 -1.967 -1.246 -0.095 2.125 -2.807 0.000
## OC4 -1.341 -0.272 1.131 -0.197 -0.581 1.273
## EP1 0.351 0.213 0.216 -2.154 -1.706 0.266
## EP2 0.184 -0.284 1.788 -0.116 0.782 3.065
## EP3 -1.011 -0.503 0.665 -2.644 0.194 1.655
## EP4 -0.346 -1.469 1.182 -0.176 -0.956 2.627
## SI1 -0.308 -0.138 0.086 -2.145 1.428 -1.959
## SI2 -0.939 -0.462 0.241 -2.556 2.377 -1.027
## SI3 -0.631 0.225 0.225 -2.613 -0.009 -1.807
## SI4 -0.339 0.924 1.329 -2.751 1.916 -1.447
## OC4 EP1 EP2 EP3 EP4 SI1
## JS1
## JS2
## JS3
## JS4
## JS5
## AC1
## AC2
## AC3
## AC4
## OC1
## OC2
## OC3
## OC4 0.000
## EP1 -0.260 0.000
## EP2 1.595 2.481 0.000
## EP3 -1.177 -2.160 -1.068 0.000
## EP4 -0.856 -0.993 -2.871 3.464 0.000
## SI1 0.139 0.783 -0.622 -2.103 0.693 0.000
## SI2 -0.528 1.114 -0.004 -2.078 -1.346 3.373
## SI3 -1.272 1.169 0.384 -0.785 0.554 -1.444
## SI4 0.818 1.467 2.593 -0.387 -0.111 -2.587
## SI2 SI3 SI4
## JS1
## JS2
## JS3
## JS4
## JS5
## AC1
## AC2
## AC3
## AC4
## OC1
## OC2
## OC3
## OC4
## EP1
## EP2
## EP3
## EP4
## SI1
## SI2 0.000
## SI3 -1.391 0.000
## SI4 -1.875 2.821 0.000
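Scanning this matrix by eye is error-prone; a short helper can pull out any standardized residuals above a common |2.5| cutoff. A sketch that reuses the fitted object `fcfa` from above (the cutoff and the data-frame layout are our own choices, not part of lavaan's output):

```r
# Flag standardized residuals exceeding |2.5| in the lower triangle
sr  <- resid(fcfa, "standardized")$cov
big <- which(abs(sr) > 2.5 & lower.tri(sr), arr.ind = TRUE)
data.frame(item1 = rownames(sr)[big[, 1]],
           item2 = colnames(sr)[big[, 2]],
           resid = round(sr[big], 2))
```

In the output above, pairs such as EP3-EP4 (3.46) and SI1-SI2 (3.37) would be flagged as candidates for further inspection.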
For reflective models, the =~ operator specifies the measurement model, ~ specifies the structural (regression) paths, and ~~ specifies covariances.
m1<- '
JS =~ JS1+JS2+JS3+JS4+JS5
AC =~ AC1+AC2+AC3+AC4
OC =~ OC1+OC2+OC3+OC4
EP =~ EP1+EP2+EP3+EP4
SI =~ SI1+SI2+SI3+SI4
JS ~ EP +AC
OC ~ EP + AC + JS
SI ~ JS + OC
EP ~~ AC
'
fit1_SEM<- sem(model=m1, data=hbat.sem)
summary(fit1_SEM, standardized=T, rsquare=T, fit.measures=T)
## lavaan 0.6-18 ended normally after 41 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 50
##
## Number of observations 400
##
## Model Test User Model:
##
## Test statistic 287.040
## Degrees of freedom 181
## P-value (Chi-square) 0.000
##
## Model Test Baseline Model:
##
## Test statistic 4452.408
## Degrees of freedom 210
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 0.975
## Tucker-Lewis Index (TLI) 0.971
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -13939.913
## Loglikelihood unrestricted model (H1) -13796.393
##
## Akaike (AIC) 27979.827
## Bayesian (BIC) 28179.400
## Sample-size adjusted Bayesian (SABIC) 28020.747
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.038
## 90 Percent confidence interval - lower 0.030
## 90 Percent confidence interval - upper 0.046
## P-value H_0: RMSEA <= 0.050 0.992
## P-value H_0: RMSEA >= 0.080 0.000
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.060
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## JS =~
## JS1 1.000 0.988 0.739
## JS2 1.036 0.076 13.663 0.000 1.024 0.748
## JS3 0.905 0.072 12.497 0.000 0.894 0.680
## JS4 0.912 0.071 12.928 0.000 0.901 0.704
## JS5 15.234 1.138 13.392 0.000 15.050 0.732
## AC =~
## AC1 1.000 1.144 0.822
## AC2 1.236 0.067 18.384 0.000 1.414 0.820
## AC3 1.037 0.055 18.847 0.000 1.186 0.837
## AC4 1.147 0.063 18.261 0.000 1.313 0.816
## OC =~
## OC1 1.000 1.455 0.577
## OC2 1.328 0.110 12.080 0.000 1.932 0.885
## OC3 0.790 0.077 10.217 0.000 1.149 0.656
## OC4 1.172 0.099 11.803 0.000 1.705 0.832
## EP =~
## EP1 1.000 1.253 0.685
## EP2 1.040 0.075 13.855 0.000 1.303 0.802
## EP3 0.835 0.061 13.633 0.000 1.046 0.785
## EP4 0.924 0.065 14.130 0.000 1.157 0.824
## SI =~
## SI1 1.000 0.707 0.813
## SI2 1.076 0.055 19.670 0.000 0.761 0.869
## SI3 1.058 0.066 15.976 0.000 0.749 0.738
## SI4 1.158 0.061 19.117 0.000 0.819 0.848
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## JS ~
## EP 0.199 0.049 4.040 0.000 0.252 0.252
## AC -0.010 0.051 -0.188 0.851 -0.011 -0.011
## OC ~
## EP 0.523 0.079 6.627 0.000 0.450 0.450
## AC 0.255 0.068 3.745 0.000 0.200 0.200
## JS 0.126 0.078 1.608 0.108 0.085 0.085
## SI ~
## JS 0.087 0.036 2.383 0.017 0.121 0.121
## OC 0.269 0.032 8.284 0.000 0.553 0.553
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## AC ~~
## EP 0.368 0.087 4.244 0.000 0.257 0.257
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .JS1 0.813 0.074 11.002 0.000 0.813 0.454
## .JS2 0.824 0.076 10.814 0.000 0.824 0.440
## .JS3 0.930 0.078 11.910 0.000 0.930 0.538
## .JS4 0.825 0.071 11.576 0.000 0.825 0.504
## .JS5 196.531 17.656 11.131 0.000 196.531 0.465
## .AC1 0.628 0.060 10.559 0.000 0.628 0.324
## .AC2 0.972 0.092 10.606 0.000 0.972 0.327
## .AC3 0.603 0.060 10.118 0.000 0.603 0.300
## .AC4 0.865 0.081 10.720 0.000 0.865 0.334
## .OC1 4.244 0.321 13.227 0.000 4.244 0.667
## .OC2 1.038 0.144 7.212 0.000 1.038 0.218
## .OC3 1.751 0.137 12.739 0.000 1.751 0.570
## .OC4 1.298 0.136 9.547 0.000 1.298 0.309
## .EP1 1.775 0.145 12.250 0.000 1.775 0.531
## .EP2 0.945 0.093 10.200 0.000 0.945 0.358
## .EP3 0.682 0.064 10.633 0.000 0.682 0.384
## .EP4 0.634 0.067 9.510 0.000 0.634 0.321
## .SI1 0.256 0.023 11.006 0.000 0.256 0.339
## .SI2 0.189 0.021 9.173 0.000 0.189 0.246
## .SI3 0.469 0.038 12.259 0.000 0.469 0.456
## .SI4 0.263 0.026 9.998 0.000 0.263 0.282
## .JS 0.915 0.115 7.940 0.000 0.938 0.938
## AC 1.309 0.136 9.661 0.000 1.000 1.000
## .OC 1.443 0.247 5.840 0.000 0.682 0.682
## EP 1.569 0.213 7.364 0.000 1.000 1.000
## .SI 0.326 0.036 8.943 0.000 0.652 0.652
##
## R-Square:
## Estimate
## JS1 0.546
## JS2 0.560
## JS3 0.462
## JS4 0.496
## JS5 0.535
## AC1 0.676
## AC2 0.673
## AC3 0.700
## AC4 0.666
## OC1 0.333
## OC2 0.782
## OC3 0.430
## OC4 0.691
## EP1 0.469
## EP2 0.642
## EP3 0.616
## EP4 0.679
## SI1 0.661
## SI2 0.754
## SI3 0.544
## SI4 0.718
## JS 0.062
## OC 0.318
## SI 0.348
fitmeasures(fit1_SEM)
## npar fmin chisq
## 50.000 0.359 287.040
## df pvalue baseline.chisq
## 181.000 0.000 4452.408
## baseline.df baseline.pvalue cfi
## 210.000 0.000 0.975
## tli nnfi rfi
## 0.971 0.971 0.925
## nfi pnfi ifi
## 0.936 0.806 0.975
## rni logl unrestricted.logl
## 0.975 -13939.913 -13796.393
## aic bic ntotal
## 27979.827 28179.400 400.000
## bic2 rmsea rmsea.ci.lower
## 28020.747 0.038 0.030
## rmsea.ci.upper rmsea.ci.level rmsea.pvalue
## 0.046 0.900 0.992
## rmsea.close.h0 rmsea.notclose.pvalue rmsea.notclose.h0
## 0.050 0.000 0.080
## rmr rmr_nomean srmr
## 0.410 0.410 0.060
## srmr_bentler srmr_bentler_nomean crmr
## 0.060 0.060 0.063
## crmr_nomean srmr_mplus srmr_mplus_nomean
## 0.063 0.060 0.060
## cn_05 cn_01 gfi
## 298.367 318.975 0.938
## agfi pgfi mfi
## 0.921 0.735 0.876
## ecvi
## 0.968
semTools::reliability(fit1_SEM)
## JS AC OC EP SI
## alpha 0.2809951 0.8907652 0.8227408 0.8474340 0.8863459
## omega 0.6401053 0.8928834 0.8237614 0.8487858 0.8868089
## omega2 0.6401053 0.8928834 0.8237614 0.8487858 0.8868089
## omega3 0.6409479 0.8928948 0.8101963 0.8477886 0.8860906
## avevar 0.5351300 0.6772825 0.5473606 0.5855465 0.6626318
semPaths(fit1_SEM,rotation =2,layout = "tree2",style = "lisrel")
semPaths(fit1_SEM,what = "path", whatLabels = "par", residuals = T,rotation =2,layout = "tree2",style = "lisrel",groups = "latents")
semPaths(fit1_SEM, what = "path",whatLabels="par", rotation = 2, layout="tree2",structural=T)
# Simpler plots
lavaanPlot(model = fit1_SEM, coefs = T)
# Readable table of parameter estimates
table_results(fit1_SEM)
## label est_sig se pval confint
## 1 JS.BY.JS1 1.00 0.00 <NA> [1.00, 1.00]
## 2 JS.BY.JS2 1.04*** 0.08 0.00 [0.89, 1.18]
## 3 JS.BY.JS3 0.90*** 0.07 0.00 [0.76, 1.05]
## 4 JS.BY.JS4 0.91*** 0.07 0.00 [0.77, 1.05]
## 5 JS.BY.JS5 15.23*** 1.14 0.00 [13.00, 17.46]
## 6 AC.BY.AC1 1.00 0.00 <NA> [1.00, 1.00]
## 7 AC.BY.AC2 1.24*** 0.07 0.00 [1.10, 1.37]
## 8 AC.BY.AC3 1.04*** 0.06 0.00 [0.93, 1.14]
## 9 AC.BY.AC4 1.15*** 0.06 0.00 [1.02, 1.27]
## 10 OC.BY.OC1 1.00 0.00 <NA> [1.00, 1.00]
## 11 OC.BY.OC2 1.33*** 0.11 0.00 [1.11, 1.54]
## 12 OC.BY.OC3 0.79*** 0.08 0.00 [0.64, 0.94]
## 13 OC.BY.OC4 1.17*** 0.10 0.00 [0.98, 1.37]
## 14 EP.BY.EP1 1.00 0.00 <NA> [1.00, 1.00]
## 15 EP.BY.EP2 1.04*** 0.08 0.00 [0.89, 1.19]
## 16 EP.BY.EP3 0.84*** 0.06 0.00 [0.72, 0.96]
## 17 EP.BY.EP4 0.92*** 0.07 0.00 [0.80, 1.05]
## 18 SI.BY.SI1 1.00 0.00 <NA> [1.00, 1.00]
## 19 SI.BY.SI2 1.08*** 0.05 0.00 [0.97, 1.18]
## 20 SI.BY.SI3 1.06*** 0.07 0.00 [0.93, 1.19]
## 21 SI.BY.SI4 1.16*** 0.06 0.00 [1.04, 1.28]
## 22 JS.ON.EP 0.20*** 0.05 0.00 [0.10, 0.29]
## 23 JS.ON.AC -0.01 0.05 0.85 [-0.11, 0.09]
## 24 OC.ON.EP 0.52*** 0.08 0.00 [0.37, 0.68]
## 25 OC.ON.AC 0.25*** 0.07 0.00 [0.12, 0.39]
## 26 OC.ON.JS 0.13 0.08 0.11 [-0.03, 0.28]
## 27 SI.ON.JS 0.09* 0.04 0.02 [0.02, 0.16]
## 28 SI.ON.OC 0.27*** 0.03 0.00 [0.21, 0.33]
## 29 AC.WITH.EP 0.37*** 0.09 0.00 [0.20, 0.54]
## 30 Variances.JS1 0.81*** 0.07 0.00 [0.67, 0.96]
## 31 Variances.JS2 0.82*** 0.08 0.00 [0.67, 0.97]
## 32 Variances.JS3 0.93*** 0.08 0.00 [0.78, 1.08]
## 33 Variances.JS4 0.82*** 0.07 0.00 [0.68, 0.96]
## 34 Variances.JS5 196.53*** 17.66 0.00 [161.93, 231.14]
## 35 Variances.AC1 0.63*** 0.06 0.00 [0.51, 0.74]
## 36 Variances.AC2 0.97*** 0.09 0.00 [0.79, 1.15]
## 37 Variances.AC3 0.60*** 0.06 0.00 [0.49, 0.72]
## 38 Variances.AC4 0.86*** 0.08 0.00 [0.71, 1.02]
## 39 Variances.OC1 4.24*** 0.32 0.00 [3.62, 4.87]
## 40 Variances.OC2 1.04*** 0.14 0.00 [0.76, 1.32]
## 41 Variances.OC3 1.75*** 0.14 0.00 [1.48, 2.02]
## 42 Variances.OC4 1.30*** 0.14 0.00 [1.03, 1.56]
## 43 Variances.EP1 1.77*** 0.14 0.00 [1.49, 2.06]
## 44 Variances.EP2 0.95*** 0.09 0.00 [0.76, 1.13]
## 45 Variances.EP3 0.68*** 0.06 0.00 [0.56, 0.81]
## 46 Variances.EP4 0.63*** 0.07 0.00 [0.50, 0.76]
## 47 Variances.SI1 0.26*** 0.02 0.00 [0.21, 0.30]
## 48 Variances.SI2 0.19*** 0.02 0.00 [0.15, 0.23]
## 49 Variances.SI3 0.47*** 0.04 0.00 [0.39, 0.54]
## 50 Variances.SI4 0.26*** 0.03 0.00 [0.21, 0.31]
## 51 Variances.JS 0.92*** 0.12 0.00 [0.69, 1.14]
## 52 Variances.AC 1.31*** 0.14 0.00 [1.04, 1.57]
## 53 Variances.OC 1.44*** 0.25 0.00 [0.96, 1.93]
## 54 Variances.EP 1.57*** 0.21 0.00 [1.15, 1.99]
## 55 Variances.SI 0.33*** 0.04 0.00 [0.25, 0.40]
# Extract factor loadings of the CFA model and SEM model 1
inspect(fcfa,"std")$lambda
## JS AC OC EP SI
## JS1 0.740 0.000 0.000 0.000 0.000
## JS2 0.748 0.000 0.000 0.000 0.000
## JS3 0.680 0.000 0.000 0.000 0.000
## JS4 0.705 0.000 0.000 0.000 0.000
## JS5 0.731 0.000 0.000 0.000 0.000
## AC1 0.000 0.822 0.000 0.000 0.000
## AC2 0.000 0.820 0.000 0.000 0.000
## AC3 0.000 0.837 0.000 0.000 0.000
## AC4 0.000 0.815 0.000 0.000 0.000
## OC1 0.000 0.000 0.583 0.000 0.000
## OC2 0.000 0.000 0.886 0.000 0.000
## OC3 0.000 0.000 0.657 0.000 0.000
## OC4 0.000 0.000 0.836 0.000 0.000
## EP1 0.000 0.000 0.000 0.692 0.000
## EP2 0.000 0.000 0.000 0.803 0.000
## EP3 0.000 0.000 0.000 0.779 0.000
## EP4 0.000 0.000 0.000 0.823 0.000
## SI1 0.000 0.000 0.000 0.000 0.811
## SI2 0.000 0.000 0.000 0.000 0.864
## SI3 0.000 0.000 0.000 0.000 0.741
## SI4 0.000 0.000 0.000 0.000 0.852
loadingcfa<-inspect(fcfa,"std")$lambda
inspect(fit1_SEM,"std")$lambda
## JS AC OC EP SI
## JS1 0.739 0.000 0.000 0.000 0.000
## JS2 0.748 0.000 0.000 0.000 0.000
## JS3 0.680 0.000 0.000 0.000 0.000
## JS4 0.704 0.000 0.000 0.000 0.000
## JS5 0.732 0.000 0.000 0.000 0.000
## AC1 0.000 0.822 0.000 0.000 0.000
## AC2 0.000 0.820 0.000 0.000 0.000
## AC3 0.000 0.837 0.000 0.000 0.000
## AC4 0.000 0.816 0.000 0.000 0.000
## OC1 0.000 0.000 0.577 0.000 0.000
## OC2 0.000 0.000 0.885 0.000 0.000
## OC3 0.000 0.000 0.656 0.000 0.000
## OC4 0.000 0.000 0.832 0.000 0.000
## EP1 0.000 0.000 0.000 0.685 0.000
## EP2 0.000 0.000 0.000 0.802 0.000
## EP3 0.000 0.000 0.000 0.785 0.000
## EP4 0.000 0.000 0.000 0.824 0.000
## SI1 0.000 0.000 0.000 0.000 0.813
## SI2 0.000 0.000 0.000 0.000 0.869
## SI3 0.000 0.000 0.000 0.000 0.738
## SI4 0.000 0.000 0.000 0.000 0.848
loadingfit1<-inspect(fit1_SEM,"std")$lambda
# View fit measures
attributes(fitmeasures(fcfa))
## $names
## [1] "npar" "fmin" "chisq"
## [4] "df" "pvalue" "baseline.chisq"
## [7] "baseline.df" "baseline.pvalue" "cfi"
## [10] "tli" "nnfi" "rfi"
## [13] "nfi" "pnfi" "ifi"
## [16] "rni" "logl" "unrestricted.logl"
## [19] "aic" "bic" "ntotal"
## [22] "bic2" "rmsea" "rmsea.ci.lower"
## [25] "rmsea.ci.upper" "rmsea.ci.level" "rmsea.pvalue"
## [28] "rmsea.close.h0" "rmsea.notclose.pvalue" "rmsea.notclose.h0"
## [31] "rmr" "rmr_nomean" "srmr"
## [34] "srmr_bentler" "srmr_bentler_nomean" "crmr"
## [37] "crmr_nomean" "srmr_mplus" "srmr_mplus_nomean"
## [40] "cn_05" "cn_01" "gfi"
## [43] "agfi" "pgfi" "mfi"
## [46] "ecvi"
##
## $class
## [1] "lavaan.vector" "numeric"
# Select specific fit indices
cfaindices <- fitmeasures(fcfa,
fit.measures = c("chisq","df","pvalue","gfi","rmsea",
"rmsea.ci.lower","rmsea.ci.upper",
"rmr","srmr","nfi","nnfi","cfi",
"rfi","agfi","pnfi"))
fit1indices <- fitmeasures(fit1_SEM,
fit.measures = c("chisq","df","pvalue","gfi","rmsea",
"rmsea.ci.lower","rmsea.ci.upper",
"rmr","srmr","nfi","nnfi","cfi",
"rfi","agfi","pnfi"))
# Convert to data frame
cfaindices_df <- as.data.frame(cfaindices)
fit1indices_df <- as.data.frame(fit1indices)
options(scipen = 999)
# Combine into a single table
goftable <- cbind(cfaindices_df, fit1indices_df)
colnames(goftable) <- c("CFA Model", "Employee Retention Model")
round(goftable, 3)
## CFA Model Employee Retention Model
## chisq 240.600 287.040
## df 179.000 181.000
## pvalue 0.001 0.000
## gfi 0.947 0.938
## rmsea 0.029 0.038
## rmsea.ci.lower 0.019 0.030
## rmsea.ci.upper 0.038 0.046
## rmr 0.414 0.410
## srmr 0.036 0.060
## nfi 0.946 0.936
## nnfi 0.983 0.971
## cfi 0.985 0.975
## rfi 0.937 0.925
## agfi 0.932 0.921
## pnfi 0.806 0.806
m2<- '
JS =~ JS1+JS2+JS3+JS4+JS5
AC =~ AC1+AC2+AC3+AC4
OC =~ OC1+OC2+OC3+OC4
EP =~ EP1+EP2+EP3+EP4
SI =~ SI1+SI2+SI3+SI4
JS ~ EP +AC
OC ~ EP + AC + JS
SI ~ JS + OC +EP
EP ~~ AC
'
fit2_SEM<- sem(model=m2, data=hbat.sem)
summary(fit2_SEM, standardized=T, rsquare=T, fit.measures=T)
## lavaan 0.6-18 ended normally after 46 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 51
##
## Number of observations 400
##
## Model Test User Model:
##
## Test statistic 246.102
## Degrees of freedom 180
## P-value (Chi-square) 0.001
##
## Model Test Baseline Model:
##
## Test statistic 4452.408
## Degrees of freedom 210
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 0.984
## Tucker-Lewis Index (TLI) 0.982
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -13919.445
## Loglikelihood unrestricted model (H1) -13796.393
##
## Akaike (AIC) 27940.889
## Bayesian (BIC) 28144.454
## Sample-size adjusted Bayesian (SABIC) 27982.628
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.030
## 90 Percent confidence interval - lower 0.020
## 90 Percent confidence interval - upper 0.039
## P-value H_0: RMSEA <= 0.050 1.000
## P-value H_0: RMSEA >= 0.080 0.000
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.040
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## JS =~
## JS1 1.000 0.990 0.740
## JS2 1.033 0.076 13.680 0.000 1.023 0.748
## JS3 0.903 0.072 12.514 0.000 0.894 0.680
## JS4 0.910 0.070 12.953 0.000 0.902 0.705
## JS5 15.193 1.133 13.412 0.000 15.045 0.731
## AC =~
## AC1 1.000 1.144 0.822
## AC2 1.236 0.067 18.385 0.000 1.414 0.820
## AC3 1.037 0.055 18.846 0.000 1.186 0.837
## AC4 1.147 0.063 18.261 0.000 1.313 0.816
## OC =~
## OC1 1.000 1.468 0.582
## OC2 1.320 0.108 12.196 0.000 1.938 0.887
## OC3 0.782 0.076 10.288 0.000 1.148 0.655
## OC4 1.166 0.098 11.937 0.000 1.711 0.834
## EP =~
## EP1 1.000 1.266 0.692
## EP2 1.032 0.073 14.094 0.000 1.307 0.804
## EP3 0.820 0.060 13.738 0.000 1.038 0.778
## EP4 0.913 0.064 14.339 0.000 1.155 0.822
## SI =~
## SI1 1.000 0.706 0.811
## SI2 1.074 0.055 19.575 0.000 0.758 0.865
## SI3 1.065 0.066 16.040 0.000 0.751 0.741
## SI4 1.166 0.061 19.207 0.000 0.823 0.851
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## JS ~
## EP 0.192 0.049 3.934 0.000 0.245 0.245
## AC -0.012 0.051 -0.229 0.819 -0.013 -0.013
## OC ~
## EP 0.488 0.077 6.309 0.000 0.421 0.421
## AC 0.253 0.070 3.640 0.000 0.198 0.198
## JS 0.144 0.080 1.803 0.071 0.097 0.097
## SI ~
## JS 0.046 0.035 1.330 0.183 0.065 0.065
## OC 0.172 0.030 5.777 0.000 0.359 0.359
## EP 0.207 0.034 6.094 0.000 0.371 0.371
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## AC ~~
## EP 0.384 0.088 4.376 0.000 0.265 0.265
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .JS1 0.808 0.074 10.964 0.000 0.808 0.452
## .JS2 0.825 0.076 10.820 0.000 0.825 0.441
## .JS3 0.930 0.078 11.909 0.000 0.930 0.538
## .JS4 0.824 0.071 11.570 0.000 0.824 0.503
## .JS5 196.689 17.670 11.131 0.000 196.689 0.465
## .AC1 0.628 0.060 10.559 0.000 0.628 0.324
## .AC2 0.972 0.092 10.605 0.000 0.972 0.327
## .AC3 0.603 0.060 10.120 0.000 0.603 0.300
## .AC4 0.865 0.081 10.720 0.000 0.865 0.334
## .OC1 4.205 0.319 13.190 0.000 4.205 0.661
## .OC2 1.013 0.148 6.862 0.000 1.013 0.212
## .OC3 1.753 0.138 12.725 0.000 1.753 0.571
## .OC4 1.278 0.138 9.284 0.000 1.278 0.304
## .EP1 1.743 0.143 12.220 0.000 1.743 0.521
## .EP2 0.937 0.091 10.250 0.000 0.937 0.354
## .EP3 0.700 0.064 10.874 0.000 0.700 0.394
## .EP4 0.639 0.066 9.687 0.000 0.639 0.324
## .SI1 0.258 0.023 11.113 0.000 0.258 0.342
## .SI2 0.193 0.021 9.433 0.000 0.193 0.252
## .SI3 0.465 0.038 12.263 0.000 0.465 0.452
## .SI4 0.257 0.026 9.960 0.000 0.257 0.275
## .JS 0.923 0.116 7.967 0.000 0.941 0.941
## AC 1.309 0.136 9.661 0.000 1.000 1.000
## .OC 1.527 0.258 5.916 0.000 0.709 0.709
## EP 1.602 0.214 7.477 0.000 1.000 1.000
## .SI 0.287 0.033 8.812 0.000 0.576 0.576
##
## R-Square:
## Estimate
## JS1 0.548
## JS2 0.559
## JS3 0.462
## JS4 0.497
## JS5 0.535
## AC1 0.676
## AC2 0.673
## AC3 0.700
## AC4 0.666
## OC1 0.339
## OC2 0.788
## OC3 0.429
## OC4 0.696
## EP1 0.479
## EP2 0.646
## EP3 0.606
## EP4 0.676
## SI1 0.658
## SI2 0.748
## SI3 0.548
## SI4 0.725
## JS 0.059
## OC 0.291
## SI 0.424
fit2indices <- fitmeasures(fit2_SEM,
fit.measures = c("chisq","df","pvalue","gfi","rmsea",
"rmsea.ci.lower","rmsea.ci.upper",
"rmr","srmr","nfi","nnfi","cfi",
"rfi","agfi","pnfi"))
fit2_df <- as.data.frame(fit2indices)
cfa_df <- as.data.frame(cfaindices)
fit1_df <- as.data.frame(fit1indices)
options(scipen = 999)
# Combine
goftable2 <- cbind(cfa_df, fit1_df, fit2_df)
colnames(goftable2) <- c("CFA", "Retention", "Revised")
round(goftable2, 3)
## CFA Retention Revised
## chisq 240.600 287.040 246.102
## df 179.000 181.000 180.000
## pvalue 0.001 0.000 0.001
## gfi 0.947 0.938 0.945
## rmsea 0.029 0.038 0.030
## rmsea.ci.lower 0.019 0.030 0.020
## rmsea.ci.upper 0.038 0.046 0.039
## rmr 0.414 0.410 0.412
## srmr 0.036 0.060 0.040
## nfi 0.946 0.936 0.945
## nnfi 0.983 0.971 0.982
## cfi 0.985 0.975 0.984
## rfi 0.937 0.925 0.936
## agfi 0.932 0.921 0.930
## pnfi 0.806 0.806 0.810
semTools::reliability(fit2_SEM)
## JS AC OC EP SI
## alpha 0.2809951 0.8907652 0.8227408 0.8474340 0.8863459
## omega 0.6398382 0.8928876 0.8263299 0.8496126 0.8871281
## omega2 0.6398382 0.8928876 0.8263299 0.8496126 0.8871281
## omega3 0.6407007 0.8929064 0.8166366 0.8496064 0.8869025
## avevar 0.5347712 0.6772934 0.5518069 0.5873320 0.6633962
anova(fit1_SEM,fit2_SEM)
##
## Chi-Squared Difference Test
##
## Df AIC BIC Chisq Chisq diff RMSEA Df diff Pr(>Chisq)
## fit2_SEM 180 27941 28144 246.10
## fit1_SEM 181 27980 28179 287.04 40.938 0.31598 1 0.0000000001572 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
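Because the two models are nested (m1 is m2 with the EP -> SI path fixed to zero), the chi-square difference test can be reproduced by hand from the statistics reported above:

```r
chisq_diff <- 287.040 - 246.102          # difference in test statistics: 40.938
df_diff    <- 181 - 180                  # difference in degrees of freedom: 1
pchisq(chisq_diff, df = df_diff, lower.tail = FALSE)
# ~1.57e-10, matching anova(): freeing the EP -> SI path significantly
# improves fit, so the revised model (m2) is preferred
```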
semPaths(fit2_SEM,what = "path", whatLabels = "std", residuals = T,rotation =2,layout = "tree2",style = "lisrel",groups = "latents")
# Simpler plots
lavaanPlot(model = fit2_SEM, coefs = T)
# Readable table of parameter estimates
table_results(fit2_SEM)
## label est_sig se pval confint
## 1 JS.BY.JS1 1.00 0.00 <NA> [1.00, 1.00]
## 2 JS.BY.JS2 1.03*** 0.08 0.00 [0.88, 1.18]
## 3 JS.BY.JS3 0.90*** 0.07 0.00 [0.76, 1.04]
## 4 JS.BY.JS4 0.91*** 0.07 0.00 [0.77, 1.05]
## 5 JS.BY.JS5 15.19*** 1.13 0.00 [12.97, 17.41]
## 6 AC.BY.AC1 1.00 0.00 <NA> [1.00, 1.00]
## 7 AC.BY.AC2 1.24*** 0.07 0.00 [1.10, 1.37]
## 8 AC.BY.AC3 1.04*** 0.06 0.00 [0.93, 1.14]
## 9 AC.BY.AC4 1.15*** 0.06 0.00 [1.02, 1.27]
## 10 OC.BY.OC1 1.00 0.00 <NA> [1.00, 1.00]
## 11 OC.BY.OC2 1.32*** 0.11 0.00 [1.11, 1.53]
## 12 OC.BY.OC3 0.78*** 0.08 0.00 [0.63, 0.93]
## 13 OC.BY.OC4 1.17*** 0.10 0.00 [0.97, 1.36]
## 14 EP.BY.EP1 1.00 0.00 <NA> [1.00, 1.00]
## 15 EP.BY.EP2 1.03*** 0.07 0.00 [0.89, 1.18]
## 16 EP.BY.EP3 0.82*** 0.06 0.00 [0.70, 0.94]
## 17 EP.BY.EP4 0.91*** 0.06 0.00 [0.79, 1.04]
## 18 SI.BY.SI1 1.00 0.00 <NA> [1.00, 1.00]
## 19 SI.BY.SI2 1.07*** 0.05 0.00 [0.97, 1.18]
## 20 SI.BY.SI3 1.06*** 0.07 0.00 [0.93, 1.19]
## 21 SI.BY.SI4 1.17*** 0.06 0.00 [1.05, 1.29]
## 22 JS.ON.EP 0.19*** 0.05 0.00 [0.10, 0.29]
## 23 JS.ON.AC -0.01 0.05 0.82 [-0.11, 0.09]
## 24 OC.ON.EP 0.49*** 0.08 0.00 [0.34, 0.64]
## 25 OC.ON.AC 0.25*** 0.07 0.00 [0.12, 0.39]
## 26 OC.ON.JS 0.14 0.08 0.07 [-0.01, 0.30]
## 27 SI.ON.JS 0.05 0.03 0.18 [-0.02, 0.11]
## 28 SI.ON.OC 0.17*** 0.03 0.00 [0.11, 0.23]
## 29 SI.ON.EP 0.21*** 0.03 0.00 [0.14, 0.27]
## 30 AC.WITH.EP 0.38*** 0.09 0.00 [0.21, 0.56]
## 31 Variances.JS1 0.81*** 0.07 0.00 [0.66, 0.95]
## 32 Variances.JS2 0.83*** 0.08 0.00 [0.68, 0.97]
## 33 Variances.JS3 0.93*** 0.08 0.00 [0.78, 1.08]
## 34 Variances.JS4 0.82*** 0.07 0.00 [0.68, 0.96]
## 35 Variances.JS5 196.69*** 17.67 0.00 [162.06, 231.32]
## 36 Variances.AC1 0.63*** 0.06 0.00 [0.51, 0.74]
## 37 Variances.AC2 0.97*** 0.09 0.00 [0.79, 1.15]
## 38 Variances.AC3 0.60*** 0.06 0.00 [0.49, 0.72]
## 39 Variances.AC4 0.86*** 0.08 0.00 [0.71, 1.02]
## 40 Variances.OC1 4.21*** 0.32 0.00 [3.58, 4.83]
## 41 Variances.OC2 1.01*** 0.15 0.00 [0.72, 1.30]
## 42 Variances.OC3 1.75*** 0.14 0.00 [1.48, 2.02]
## 43 Variances.OC4 1.28*** 0.14 0.00 [1.01, 1.55]
## 44 Variances.EP1 1.74*** 0.14 0.00 [1.46, 2.02]
## 45 Variances.EP2 0.94*** 0.09 0.00 [0.76, 1.12]
## 46 Variances.EP3 0.70*** 0.06 0.00 [0.57, 0.83]
## 47 Variances.EP4 0.64*** 0.07 0.00 [0.51, 0.77]
## 48 Variances.SI1 0.26*** 0.02 0.00 [0.21, 0.30]
## 49 Variances.SI2 0.19*** 0.02 0.00 [0.15, 0.23]
## 50 Variances.SI3 0.46*** 0.04 0.00 [0.39, 0.54]
## 51 Variances.SI4 0.26*** 0.03 0.00 [0.21, 0.31]
## 52 Variances.JS 0.92*** 0.12 0.00 [0.70, 1.15]
## 53 Variances.AC 1.31*** 0.14 0.00 [1.04, 1.57]
## 54 Variances.OC 1.53*** 0.26 0.00 [1.02, 2.03]
## 55 Variances.EP 1.60*** 0.21 0.00 [1.18, 2.02]
## 56 Variances.SI 0.29*** 0.03 0.00 [0.22, 0.35]